Improve failure mode, add multiple DCs #1273
base: main
Conversation
> The architecture we currently use allows us to deploy coordinators in 3 data centers and hence tolerate a failure of the whole data center. Data instances can be freely distributed in any way you want between data centers.
I would extend this with some notes on the expected system requirements, e.g., the latency should be under N ms 🤔
I think it's not necessary. Failover will be slower, but a slower network IMO still doesn't disqualify the architecture.
Just a small typo + rewording.
## Data center failure

The architecture we currently use allows us to deploy coordinators in 3 data centers and hence tolerate a failure of the whole data center. Data instances can be freely distributed in any way you want between data centers. The failover time will be slightly increased due to the need for network communication.
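To illustrate why placing the coordinators across 3 data centers tolerates the loss of any one of them, here is a small sketch of majority-quorum arithmetic. This is not Memgraph's implementation, and the function name is hypothetical; it only models the availability condition for consensus-based coordinators: a cluster survives as long as a majority of coordinators remain reachable.

```python
# Hypothetical sketch of majority-quorum math for coordinator placement.
# Not Memgraph code: it only models "a majority of coordinators must survive".

def tolerates_dc_failure(coordinators_per_dc: list[int]) -> bool:
    """Return True if losing any single data center still leaves a majority."""
    total = sum(coordinators_per_dc)
    quorum = total // 2 + 1
    # Worst case: the data center hosting the most coordinators fails.
    survivors = total - max(coordinators_per_dc)
    return survivors >= quorum

# One coordinator in each of 3 data centers: losing any DC leaves 2 of 3,
# which is still a majority.
print(tolerates_dc_failure([1, 1, 1]))  # True

# All 3 coordinators in a single DC: losing that DC leaves no quorum.
print(tolerates_dc_failure([3]))  # False
```

This is also why the failover time grows slightly with cross-DC deployment: the surviving coordinators must agree over inter-data-center network links rather than a local network.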
This also points to the main branch.
Feel free to merge the suggestion. The PR should get into main because it's not connected to anything special in Memgraph 3.3.
Cool @as51340, this is part of milestone 3.3, hence the comment.
Release note
Documented the types of failures tolerated by our current highly-available cluster model. Documented a possible architecture when multiple data centers are used.
Related product PRs
Checklist:
- [ ] Add the `bugfix` or `feature` label, based on the product PR type you're documenting